Neural fields model signals by mapping coordinate inputs to sampled values. They are becoming an increasingly important backbone architecture across many fields, from vision and graphics to biology and astronomy. In this paper, we explore the differences between the conditioning mechanisms commonly used in these networks, an essential ingredient in moving neural fields from memorization of a signal to generalization, where a set of signals lying on a manifold is modeled jointly. In particular, we are interested in the scaling behavior of these mechanisms with respect to increasingly high-dimensional conditioning variables. As we show in our experiments, high-dimensional conditioning is key to modeling complex data distributions; it is therefore important to determine which architectures cope best with such problems. To this end, we run experiments modeling 2D, 3D, and 4D signals with concatenation, hypernetwork, and attention-based conditioning strategies, a necessary but laborious effort that had not yet been carried out in the literature. We find that attention-based conditioning outperforms the other approaches in a variety of settings.
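The contrast between the compared mechanisms can be made concrete with a toy sketch. Below is a hypothetical NumPy illustration (all weights and sizes invented for illustration, not the paper's architecture): concatenation appends one flat latent vector to every coordinate input, while attention-based conditioning lets each coordinate query a set of latent tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

def concat_conditioning(coords, z, w):
    """Concatenation: the latent z is appended to every coordinate input.
    coords: (N, d_coord), z: (d_z,), w: (d_coord + d_z, d_out)."""
    x = np.concatenate(
        [coords, np.broadcast_to(z, (coords.shape[0], z.shape[0]))], axis=1)
    return np.tanh(x @ w)

def attention_conditioning(coords, tokens, wq, wk, wv):
    """Cross-attention: coordinates form queries; the conditioning
    variable is a set of latent tokens supplying keys and values."""
    q = coords @ wq                                  # (N, d)
    k = tokens @ wk                                  # (T, d)
    v = tokens @ wv                                  # (T, d)
    logits = q @ k.T / np.sqrt(q.shape[1])
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)            # softmax over tokens
    return att @ v                                   # (N, d)

coords = rng.normal(size=(5, 2))    # five 2D query coordinates
z = rng.normal(size=8)              # flat latent for concatenation
tokens = rng.normal(size=(4, 8))    # latent token set for attention
out_cat = concat_conditioning(coords, z, rng.normal(size=(10, 16)))
out_att = attention_conditioning(coords, tokens,
                                 rng.normal(size=(2, 8)),
                                 rng.normal(size=(8, 8)),
                                 rng.normal(size=(8, 8)))
print(out_cat.shape, out_att.shape)  # -> (5, 16) (5, 8)
```

Note the scaling difference the paper probes: concatenation forces the conditioning variable into one vector consumed at the input, whereas attention distributes it over a token set whose size can grow with the dimensionality of the conditioning.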
We propose Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), a method that combines a sequential Monte Carlo (SMC) sampler (itself a generalization of annealed importance sampling) with variational inference using normalizing flows. The normalizing flows are directly trained to transport between annealing temperatures using a KL divergence for each transition. This optimization objective is itself estimated using the normalizing flow/SMC approximation. We show conceptually, and using multiple empirical examples, that CRAFT improves on Annealed Flow Transport Monte Carlo (Arbel et al., 2021), on which it builds, and also on Markov chain Monte Carlo (MCMC)-based Stochastic Normalizing Flows (Wu et al., 2020). By incorporating CRAFT within particle MCMC, we show that such learnt samplers can achieve impressively accurate results on a challenging lattice field theory example.
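The role of per-transition transport can be illustrated with a deliberately simple 1D example (not the paper's method: here the "flow" is a hand-derived shift map that happens to be the exact transport between successive annealed Gaussians, so the importance weights collapse to a constant and the normalizing-constant estimate has essentially zero variance):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_gamma(x, beta):
    """Unnormalized log-density of the geometric annealing path between
    a standard normal base and a unit-variance target centred at 3."""
    return (1 - beta) * (-0.5 * x**2) + beta * (-0.5 * (x - 3.0) ** 2)

betas = np.linspace(0.0, 1.0, 11)   # annealing schedule
x = rng.normal(size=10_000)         # particles drawn from the base
log_w = np.zeros_like(x)

for b_prev, b_next in zip(betas[:-1], betas[1:]):
    # The shift flow x -> x + 3*(b_next - b_prev) transports the
    # annealed Gaussian at b_prev exactly onto the one at b_next
    # (unit Jacobian, so no log-det term in the weight update).
    y = x + 3.0 * (b_next - b_prev)
    log_w += log_gamma(y, b_next) - log_gamma(x, b_prev)
    x = y

# Both endpoint densities share the same normalizer, so the estimated
# log normalizing-constant ratio should be ~0; because the flow is the
# exact transport at every transition, it is ~0 for every particle.
log_Z = np.log(np.mean(np.exp(log_w)))
print(abs(log_Z) < 1e-6)  # -> True
```

CRAFT learns such transports (as expressive normalizing flows, with SMC resampling and MCMC moves) rather than deriving them analytically; the point of the sketch is only that good transport between temperatures removes weight degeneracy.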
Research on automated essay scoring has become increasingly important because it serves as a method for evaluating students' written responses at scale. Scalable methods for scoring written responses are needed as students migrate to online learning environments, resulting in the need to evaluate large numbers of written-response assessments. The purpose of this study is to describe and evaluate three active learning methods that can be used to minimize the number of essays that must be scored by human raters while still providing the data needed to train a modern automated essay scoring system. The three active learning methods are the uncertainty-based, the topological-based, and the hybrid method. These three methods were used to select essays included as part of the Automated Student Assessment Prize competition, which were then classified using a scoring model that was trained with the Bidirectional Encoder Representations from Transformers (BERT) language model. All three active learning methods produced strong results, with the topological-based method producing the most efficient classification. Growth rate accuracy was also evaluated. The active learning methods produced different levels of efficiency under different sample size allocations but, overall, all three methods were highly efficient and produced classifications that were similar to one another.
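As a minimal sketch of the uncertainty-based method (hypothetical class probabilities; the study's actual scorer is BERT-based), one can rank essays by the entropy of the model's predicted score distribution and send the top-k to human raters:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_most_uncertain(probs, k):
    """Return indices of the k essays whose predicted score
    distribution has the highest entropy: these are the essays
    a human rater is asked to score next."""
    return np.argsort(-entropy(probs))[:k]

# Hypothetical model outputs over 4 score classes for 5 essays.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],   # confident prediction
    [0.25, 0.25, 0.25, 0.25],   # maximally uncertain
    [0.70, 0.20, 0.05, 0.05],
    [0.40, 0.35, 0.15, 0.10],
    [0.90, 0.05, 0.03, 0.02],
])
print(select_most_uncertain(probs, 2))  # -> [1 3]
```

The topological-based and hybrid methods replace or combine this entropy score with structure derived from the data manifold, but the select-then-label loop is the same.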
Strategic test allocation plays a major role in the control of both emerging and existing pandemics (e.g., COVID-19, HIV). Widespread testing supports effective epidemic control by (1) reducing transmission via identifying cases, and (2) tracking outbreak dynamics to inform targeted interventions. However, infectious disease surveillance presents unique statistical challenges. For instance, the true outcome of interest, one's positive infectious status, is often a latent variable. In addition, the presence of both network and temporal dependence reduces the data to a single observation. As testing entire populations regularly is neither efficient nor feasible, standard approaches to testing recommend simple rule-based testing strategies (e.g., symptom-based, contact tracing), without taking into account individual risk. In this work, we study an adaptive sequential design involving n individuals over a period of τ time-steps, which allows for unspecified dependence among individuals and across time. Our causal target parameter is the mean latent outcome we would have obtained after one time-step if, starting at time t given the observed past, we had carried out a stochastic intervention that maximizes the outcome under a resource constraint. We propose an Online Super Learner for adaptive sequential surveillance that learns the optimal choice of test strategies over time while adapting to the current state of the outbreak. Relying on a series of working models, the proposed method learns across samples, through time, or both, based on the underlying (unknown) structure in the data. We present an identification result for the latent outcome in terms of the observed data, and demonstrate the superior performance of the proposed strategy in a simulation modeling a residential university environment during the COVID-19 pandemic.
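A drastically simplified sketch of what a risk-driven stochastic test allocation under a resource constraint might look like (illustrative only; in the paper the intervention is learned by the Online Super Learner rather than fixed by a formula, and the risk estimates themselves are latent-outcome predictions):

```python
import numpy as np

def allocate_tests(risk, budget, rng):
    """Test each individual with probability proportional to their
    estimated infection risk, subject to an expected budget of
    `budget` tests per time-step. Hypothetical rule for illustration."""
    p = budget * risk / risk.sum()
    p = np.minimum(p, 1.0)              # keep probabilities valid
    return rng.random(len(risk)) < p    # who gets tested this step

rng = np.random.default_rng(3)
risk = np.array([0.01, 0.40, 0.05, 0.30, 0.02, 0.22])
tested = allocate_tests(risk, budget=2, rng=rng)
print(tested.shape)  # -> (6,)
```

Unlike the rule-based strategies criticized above (symptom-based, contact tracing), even this crude rule concentrates tests where estimated risk is highest while respecting the expected budget.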
Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor. In this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks of nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for the novel class of kidney, unseen in training, using between approximately 40% and 60% of the labels otherwise required with other heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers a performance improvement of 22.6% and 10.2% in Dice score, for tasks of kidney and liver vessel segmentation, respectively, compared to random prioritisation and alternative active sampling strategies.
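Schematically, one environment step looks as follows (hypothetical numbers; in the paper the priority scores come from the controller neural network, the gain is the segmentation network's Dice improvement, and the controller is trained with meta-RL across multiple MDPs):

```python
import numpy as np

def controller_step(scores, batch_size, dice_before, dice_after):
    """One step of batch-mode active learning: the controller's
    priority scores pick which images to annotate next, and the
    controller is rewarded with the task predictor's performance gain."""
    chosen = np.argsort(-scores)[:batch_size]  # highest-priority images
    reward = dice_after - dice_before          # task-specific gain signal
    return chosen, reward

scores = np.array([0.1, 0.9, 0.4, 0.8, 0.2])   # hypothetical priorities
chosen, reward = controller_step(scores, 2, dice_before=0.71, dice_after=0.74)
print(sorted(chosen.tolist()), round(reward, 2))  # -> [1, 3] 0.03
```

The adaptability claim rests on meta-training this reward loop over several such MDPs (different institutes, different organs) so that the controller transfers to an unseen one.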
Artificial Intelligence (AI) is one of the most transformative technologies of the 21st century. The extent and scope of future AI capabilities remain a key uncertainty, with widespread disagreement on timelines and potential impacts. As nations and technology companies race toward greater complexity and autonomy in AI systems, there are concerns over the extent of integration and oversight of opaque AI decision processes. This is especially true in the subfield of machine learning (ML), where systems learn to optimize objectives without human assistance. Objectives can be imperfectly specified or executed in an unexpected or potentially harmful way. This becomes more concerning as systems increase in power and autonomy, where an abrupt capability jump could result in unexpected shifts in power dynamics or even catastrophic failures. This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis. Survey data were collected from domain experts in the public and private sectors to classify AI impact and likelihood. The results show increased uncertainty over the powerful AI agent scenario, confidence in multiagent environments, and increased concern over AI alignment failures and influence-seeking behavior.
What makes few-shot learning desirable in medical image analysis is its efficient use of support image data, which are labelled to classify or segment new classes, a task that otherwise requires substantially more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that a trained network can be effectively adapted to clinically interesting structures that were absent from training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions, in episodic adaptation to novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to help the training with the observed imperfect alignment, a support mask conditioning module is proposed, to further utilise the annotations available in the support images. Extensive experiments are presented for an application of segmenting eight anatomical structures important for interventional planning, using a dataset of 589 pelvic T2-weighted MR images acquired at eight institutes. The results demonstrate the efficacy of each of the 3D formulation, spatial registration, and support mask conditioning, all of which made positive contributions independently or collectively. Compared with the previously proposed 2D alternatives, the few-shot segmentation performance was improved with statistical significance, regardless of whether the support data came from the same or a different institute.
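The prototypical core of such methods can be sketched in a few lines: a masked-average-pooling prototype from the support volume, then cosine-similarity labelling of query voxels (random toy features here; the paper's spatial registration and support-mask-conditioning modules are not shown):

```python
import numpy as np

def prototype(features, mask):
    """Masked average pooling: the class prototype is the mean feature
    vector over the support voxels labelled as foreground.
    features: (D, H, W, C) feature volume, mask: (D, H, W) binary."""
    m = mask[..., None]
    return (features * m).sum(axis=(0, 1, 2)) / m.sum()

def segment(query_features, proto, threshold=0.5):
    """Label each query voxel by cosine similarity to the prototype."""
    q = query_features / np.linalg.norm(query_features, axis=-1, keepdims=True)
    p = proto / np.linalg.norm(proto)
    return (q @ p) > threshold

rng = np.random.default_rng(4)
feats = rng.normal(size=(4, 8, 8, 16))   # toy support feature volume
mask = np.zeros((4, 8, 8))
mask[1:3, 2:6, 2:6] = 1                  # toy support annotation
proto = prototype(feats, mask)
pred = segment(rng.normal(size=(4, 8, 8, 16)), proto)
print(proto.shape, pred.shape)           # -> (16,) (4, 8, 8)
```

The fully 3D formulation matters because the prototype is pooled over the whole volume at once, rather than slice by slice as in the 2D alternatives the paper compares against.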
Automatic summary assessment is useful for both machine-generated and human-produced summaries. Automatically evaluating summary text for a given document enables, for example, summary generation system development and the detection of inappropriate summaries. Summary assessment can be run in a number of modes: ranking summary generation systems; ranking summaries of a particular document; and estimating the quality of a document-summary pair on an absolute scale. Existing datasets with annotations for summary assessment are usually based on news summarization datasets such as CNN/DailyMail or XSum. In this work, we describe a new dataset, the Podcast Summary Assessment corpus, a collection of podcast summaries that were evaluated by human experts at TREC 2020. Compared with existing summary assessment data, this dataset has two unique aspects: (i) long-input, speech-based podcast documents; and (ii) an opportunity to detect inappropriate reference summaries in the podcast corpus. First, we examine existing assessment methods, including model-free and model-based methods, and provide benchmark results for this long-input summary assessment dataset. Second, with the aim of filtering reference summary-document pairs for training, we apply summary assessment for data selection. The experimental results on these two aspects provide interesting insights on the summary assessment and generation tasks. The Podcast Summary Assessment data is available.
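A minimal model-free quality signal of the kind such benchmarks compare against is unigram-overlap F1, in the spirit of ROUGE-1 (the strings below are invented examples; the model-based assessors in the paper would replace this score):

```python
from collections import Counter

def unigram_f1(summary, reference):
    """Unigram overlap F1 between a summary and a reference text."""
    s = Counter(summary.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((s & r).values())     # multiset intersection counts
    if overlap == 0:
        return 0.0
    prec = overlap / sum(s.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

good = "the hosts discuss training marathon runners"
bad = "subscribe to our channel for more"
ref = "the hosts discuss how to train marathon runners"
print(unigram_f1(good, ref) > unigram_f1(bad, ref))  # -> True
```

The same kind of score can serve the paper's second use case, data selection: document-summary pairs whose reference scores too low are filtered out before training a generation system.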
Recent analyses of self-supervised learning (SSL) find that the following data-centric properties are critical for learning good representations: invariance to task-irrelevant semantics, separability of classes in some latent space, and recoverability of labels from augmented samples. However, given their discrete, non-Euclidean nature, graph datasets and graph SSL methods are unlikely to satisfy these properties. This raises the question: how do graph SSL methods, such as contrastive learning (CL), work at all? To systematically probe this question, we perform a generalization analysis of CL under generic graph augmentations (GGAs), with a focus on data-centric properties. Our analysis yields formal insights into the limitations of GGAs and the necessity of task-relevant augmentations. As we empirically show, GGAs do not induce task-relevant invariances on common benchmark datasets, leading to only marginal gains over naive, untrained baselines. Our theory motivates a synthetic data generation process that enables control over task-relevant information and offers pre-defined optimal augmentations. This flexible benchmark helps us identify yet unrecognized limitations in advanced augmentation techniques (e.g., automated methods). Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL.
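What "generic" means here can be made concrete: a GGA such as random edge dropping perturbs the graph without reference to the downstream task, and contrastive learning then pulls the two views' node embeddings together with an InfoNCE loss. A toy NumPy sketch (random embeddings stand in for a GNN encoder; not the paper's experimental pipeline):

```python
import numpy as np

def drop_edges(edge_index, p, rng):
    """Generic graph augmentation: drop each edge independently with
    probability p. Task-agnostic by construction, which is exactly the
    property the analysis above questions."""
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

def info_nce(z1, z2, tau=0.5):
    """InfoNCE contrastive loss between two views' node embeddings
    (rows aligned: node i in view 1 vs node i in view 2)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim -= sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))     # positives on the diagonal

rng = np.random.default_rng(5)
edges = np.array([[0, 1, 2, 3, 0],
                  [1, 2, 3, 0, 2]])        # toy edge list (2, E)
view = drop_edges(edges, p=0.3, rng=rng)   # one augmented view
z1, z2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
loss = info_nce(z1, z2)
print(view.shape[0], loss > 0)             # -> 2 True
```

Nothing in `drop_edges` knows which edges carry task-relevant signal; the paper's argument is that this blindness is why such augmentations yield only marginal gains over untrained baselines.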
Model-agnostic meta-learning algorithms aim to infer priors from several observed tasks, which can then be used to adapt to a new task with only a few examples. Given the inherent diversity of tasks arising in existing benchmarks, recent methods use separate, learnable structures, such as hierarchies or graphs, to enable task-specific adaptation of the prior. While these approaches have produced significantly better meta-learners, our goal is to improve their performance when the heterogeneous task distribution contains challenging distribution shifts and semantic disparities. To this end, we introduce CAML (Contrastive Knowledge-Augmented Meta-Learning), a novel approach for knowledge-enhanced few-shot learning that evolves a knowledge graph to effectively encode historical experience, and employs a contrastive distillation strategy to leverage the encoded knowledge for task-aware modulation of the base learner. Using standard benchmarks, we evaluate the performance of CAML in different few-shot learning scenarios. In addition to standard few-shot task adaptation, we also consider the more challenging multi-domain task adaptation and few-shot dataset generalization settings in our empirical study. Our results show that CAML consistently outperforms the best-known approaches and achieves improved generalization.